
Chapter 10

Memory Management

Certification Objectives

Types of Memory
Conventional Memory
Extended/High Memory
Upper Memory
Virtual Memory & Expanded Memory

Memory Conflicts and Optimization
What Is a Memory Conflict?
How Do Memory Conflicts Happen?
HIMEM.SYS
Use of Expanded Memory Blocks (Using Emm386.exe)
Employing Utilities
MemMaker and Other Optimization Utilities
Illegal Operations Occurrences
Conflicts with 16-Bit Applications/Windows 95 Operations

From the Field
Always Check the Swap File Size

Certification Summary

Two-Minute Drill

Chapter 10

Memory Management

Certification Objectives

Types of Memory
Memory Conflicts and Optimization

The vagaries of computer memory are often misunderstood. RAM, ROM, PROM, EEPROM, Flash, DIMM, SIMM, DRAM, SRAM, yadda, yadda. Who wouldn’t get lost? Take a deep breath, and let’s try to unravel the mystery a little.

Types of Memory

While memory is physically packaged in many sizes and shapes, a PC’s central processing unit (CPU) is only able to address two categories of memory: physical memory and virtual memory.

There are several kinds of physical memory. A basic division is between random access memory (RAM), which can be both read and written to by the CPU, and programmable read-only memory (PROM, a.k.a. ROM). Standard RAM is also known as volatile memory, in that it loses its contents when system power is shut down, while ROM is nonvolatile, its contents remaining unchanged even when power is removed. The primary difference is that the CPU can both read from and write to RAM, while ROM holds a fixed instruction set that can only be read.

RAM is available in several types. The most popular at present are fast-page RAM and EDO RAM (Extended Data Output). Originally, there were only two types, Dynamic and Static. Dynamic RAM (DRAM) had a disadvantage in speed, and it had to be constantly refreshed by a clocked supply voltage, or it would lose its contents. Static RAM (SRAM) didn’t require the refresh, and was considerably faster (by a factor of five or so), but more than ten times as expensive to make. PC makers chose to stick with DRAM.

EXAM WATCH: Physical memory consists of the hardware that handles memory in a PC. This memory is stored in chips that are either ROM–read only memory chips, or RAM–random access memory.

ROM began as a write-once chip. Instruction code was electrically fed to the chip and retained there permanently. In order to update this type of ROM, the chip must be physically replaced by another chip containing upgraded instruction code. This reduced production costs, but installation costs remained high.

Memory developers quickly realized that upgrades would become a fairly regular occurrence, so they came up with the EPROM chip (erasable programmable read-only memory). The EPROM chip was originally developed to be erased by ultraviolet light, and had a small window on the chip itself. The window was taped over with a small metal-foil cover. Removing the cover and exposing the chip window to ultraviolet light would erase it and allow a new instruction set to be loaded. The chips could be erased and reprogrammed this way dozens of times, but not hundreds. Fortunately, ROM is mostly used for specific support of hardware devices, and updates usually are infrequent.

On the heels of the EPROM, came the EEPROM (electrically erasable), which could be erased by applying a specific voltage to the chip. These still usually required removal and insertion into a PROM "burner" to erase and reprogram. Some inventive hardware manufacturers came up with ways of addressing updates onboard their hardware, but these are the exception.

Besides being difficult to reprogram, ROM was slow in comparison to RAM. While still measured in nanoseconds (a nanosecond is a billionth of a second), standard RAM access times were under 100 nanoseconds, while ROM took 300 or more. A faster version of ROM was finally developed, and labeled Flash memory. While still basically a form of ROM, Flash memory could be erased and reprogrammed thousands of times. It also came in sub-100 nanosecond modules. This opened up a method for hardware manufacturers to update their individual system code easily.

Most hardware devices now can be updated by running a program from DOS (or some other operating system) that erases and reprograms the device’s Flash memory, usually in conjunction with a small ROM chip onboard. The ROM chip is there as a backup: if the upload fails, it allows the device to keep seeking a working set of operating code from a specific source until valid code is present in Flash.

Physical memory is usually identified by the actual size, in bytes, of the memory modules, whether they are user-accessible plug-in types or individual chips soldered onto the motherboard or system card. Memory modules remain a moving target in design, as physical dimensions, application, and capacity needs continue to increase.

When the IBM PC first arrived in 1981, memory was quantified in kilobytes (KB) and a standard memory chip had a whopping 4KB. This small memory chip was considered a marvel, compared to memory modules employed by then-existing systems. Still in use at that time were 4KB memory modules powered by vacuum tubes housed in a 6’ x 6’ x 6’ cube that consumed kilowatts (thousands of watts) of electricity.

Today, memory modules come in 2, 4, 8, 16, and 32 Megabyte (MB) modules, and consume milliwatts (1000th of a watt) of power, allowing huge quantities of RAM to be contained in portable computers.

Packaging has also come a long way. At its inception, PC memory came in single-chip form. Each chip had seven or eight pins on either side of a 1" x ½" rectangular package, dubbed a DIP (Dual In-line Package) chip. There was a chip for each data line on the memory bus (a quantity of eight chips) plus an additional parity chip to ensure memory contents remained unaltered.

In the late eighties, RAM manufacturers standardized on the SIMM (Single Inline Memory Module) format, which consisted of placing the 8+1 individual chips onto a small substrate card roughly 4" x 1", which was then mounted in a quick-release connector on the motherboard. The original 30-pin SIMM format didn’t provide enough connector density for the ever-expanding capacities and types of RAM chips, so in 1993 the SIMM was lengthened somewhat and its contact count more than doubled, to 72 pins. 30-pin SIMMs are typically found in 286, 386, and older 486 Intel CPU systems in 1MB and 4MB sizes. Newer 486s and just about all Pentium-series systems use 72-pin SIMMs, which have capacities of up to 32MB per SIMM.

EXAM WATCH: DIP chips are physically soldered to a motherboard. SIMMs are chips that are soldered to a small board that is installed into a slot on the motherboard. Using SIMMs allows the memory to be easily replaced.

Unfortunately, each time the memory module package changed, the motherboard was forced to change as well. This trend of changing format every few years continues today. The newer Pentium II motherboards now use DIMM (Dual Inline Memory Modules) to allow for even greater density and larger capacity in MB. The bottom line is: Be careful in the selection of memory upgrade components. The memory module must match the requirements of the motherboard.

Conventional Memory

Because of its segment/offset addressing scheme, the architecture of the 808X processor family used in the PC allowed 1MB (1024KB) of address space for system memory. Of this, 640KB was set aside for applications and the remaining 384KB was reserved for hardware to use.

The 640KB of physical memory into which the operating system, applications, and data are loaded comprises system memory. The additional 384KB became reserved memory. The 640KB area, while appearing vast at the outset, quickly filled up.

One of the biggest memory consumers was one of the most popular applications at the time: Lotus’s 1-2-3 spreadsheet program. 1-2-3 strained the 640KB system memory limit to the breaking point. Lotus realized they had to find a solution that allowed the processor to address more than 640KB of RAM, in order for 1-2-3 to be able to deal with more complex spreadsheets. Lotus teamed up with Intel, the processor maker, and Microsoft, who developed the operating system, and came up with the Lotus-Intel-Microsoft (LIM) memory specification. The specification renamed system memory to conventional memory, and defined additional (above the original 1024KB address limit) areas as expanded, extended, and high memory. Even though the 808X processor could only address 1024KB, the newer 80286 and later processors widened memory addressing (24 bits on the 286, 32 bits on the 386 and later) and could use 16MB or more of RAM for system memory.

Extended/High Memory

Basically, all memory above the 1024KB line is considered extended memory. As we shall see, this area can be used in several ways. The basic division, though, is that 808X processors cannot access extended memory at all, but 80X86 processors can. The first 64KB of extended memory was roped off as a control area, and labeled the high-memory area (HMA). This is the area that HIMEM.SYS makes available under DOS. As part of the LIM spec, extended memory is addressed through a standard called XMS (eXtended Memory Specification).

By the time DOS 6.0 rolled out, DOS could also load a portion of itself in the HMA. Originally, use of the HMA was reserved for only a single application. Once an application loaded there, it was "done." Microsoft found a way to load HIMEM.SYS, create the HMA, unload itself, and then place up to 64KB of its system code in its place. To enable this feature, you must include the line "DOS=HIGH" in the CONFIG.SYS file along with the "DEVICE=HIMEM.SYS" line.
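As a quick illustration, a minimal CONFIG.SYS fragment that enables both features might look like the following (the C:\DOS path is an assumption; use the directory where HIMEM.SYS actually resides):

DEVICE=C:\DOS\HIMEM.SYS
DOS=HIGH

After rebooting, running MEM should report "MS-DOS is resident in the high memory area," confirming that DOS has moved a portion of its system code into the HMA.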

Enabling XMS required additional work. Two standards for accessing XMS were developed. While Microsoft and Lotus shook hands to develop the LIM specification, they were at odds as to how to best deal with XMS. Microsoft developed the DOS Protected Mode Interface (DPMI) specification, while Lotus embraced a different structure that they inherited from their acquisition of the Phar-Lap company. Lotus’s approach was named Virtual Control Program Interface (VCPI), and was totally incompatible with DPMI. This made for a short-term nightmare, as it became impossible to run Lotus 1-2-3, which used VCPI, under Windows 3.1, which used DPMI. Lotus finally conceded, and DPMI effectively has become the standard.

Upper Memory

The HMA is often confused with upper memory. The 384KB of reserved space, which became known as "upper memory" (conventional memory, below 640KB, being "lower memory"), remained sparsely populated by system boards, which allowed much of it to be used as system memory. In order to take advantage of unused upper memory, memory managers, such as EMM386.EXE, were created. Besides the LIM spec, other vendors, notably Quarterdeck with its QEMM (Quarterdeck Expanded Memory Manager) program, developed ways of digging out every unused portion of the reserved area and converting it to system memory. On 808X processor machines, any additional space was welcome. The addition of this upper memory to the system memory pool increased the usable system (conventional) memory to as much as 720KB. Of course, this "breathing room" lasted about a week. It was clear that something had to be done to allow 808X processors to remain viable until the newer 80X86 processors could predominate. Enter expanded memory.

Virtual Memory & Expanded Memory

The LIM specification worked around the 808X processor’s inability to address memory beyond 1024KB through the concept of virtualizing RAM. Virtual memory is memory that the processor has been "tricked" into using as if it were actual physical memory. The LIM folks created a "frame" of "pages," each page (also known as a window) being 16KB in size, that could be located in free areas of the reserved 384KB hardware address space. Upper memory, remember? The pages could be moved one at a time between the reserved area and either extended memory or the hard drive. A small code segment in lower memory looked at every memory address request made by the processor and used a translation table to map the requested segment/offset value to a specific page. The processor neither knew nor cared where it got its code from, extended memory or the hard drive. This new type of memory access became the Expanded Memory Specification (EMS).

EMS has a couple of drawbacks when compared to XMS. The paging structure meant that only 64KB could be moved at any one time, and the tabled segment/offset lookups took additional time. What’s more, it required that each application be aware that EMS was available to be used. In contrast, XMS was developed for 32-bit processors to access directly, with only a small HMA driver to enable it. Once enabled, the Operating System (OS) could then parse out memory as needed to all running applications.

All of this "mucking around" with the reserved area (a.k.a. upper memory) added to an already existing problem: two devices trying to use the same memory space–a memory conflict.

When installing new RAM into a 486 PC, the module did not quite fit into the slot. What could be the problem? Some 486 PCs used 30-pin modules, and newer ones used 72-pin modules. The SIMM that was bought for this 486 is the wrong type. The manufacturer should be able to tell you which type of SIMM is required for that PC.
When installing a new video adapter card, the PC will no longer boot up into Windows 95. What has occurred? This is a hard error. The new adapter card conflicts with an existing card. This is most likely a conflict with the existing video adapter, which may be part of the system board. The old video adapter, if a separate card, must be removed from the PC. Or, if the video is on the system board, it must be disabled, usually through the system BIOS.

Memory Conflicts and Optimization

Confused yet? It was all supposed to work just right. By loading application and system code "low," and device drivers "high," everything was supposed to work perfectly. IBM, with its background in systems hardware, defined specific areas in upper memory to be reserved for specific devices to be accessed by the CPU over the system bus, dubbed the ISA (Industry Standard Architecture) bus. They just didn’t foresee how rapidly newer technologies would add a multitude of new devices installed on the bus that would also need to fit into upper memory.

Network interface cards, graphics boards, SCSI adapters, and the like chomped away at already-overloaded upper memory. Worse yet, because there were no defined standards for these new devices, the default settings chosen often overlapped. Installing a new device without knowing how it and already-existing devices were configured to use a "memory footprint" could result in a memory conflict.

Even if you managed to get it to work right, the additional boards ate into those little crannies of memory that were eked out for use by the new memory manager programs. The folks who wrote the memory managers began developing better ways of detecting those areas of upper memory that were free to be used as additional conventional memory.

What Is a Memory Conflict?

Memory conflicts usually show up in one of two ways. If you’re lucky, a memory conflict will wait until a specific device driver tries to load, at which time either that device, along with the other conflicting device, ceases to function, or the system will lock up. If you’re having a bad day, then you’ll install a board that is configured to conflict with something required at startup, such as the display board or hard drive controller, in which case the system will fail to start before the operating system even gets a chance to load. The latter conflicts are usually the easiest to troubleshoot, because they happen immediately upon installation of the offending device, and removal and reconfiguration of it usually meets with success. Conflicts arising from optional devices, such as network cards, can be more difficult to pinpoint.

As long as the boards use switches to make configuration choices, you’re home free. Simply by choosing a new switch setting, you can usually resolve conflicts in one or two tries. Newer boards, however, have gotten away from configuration switches, thanks to the use of EEPROMs or Flash memory to store configuration settings. These boards require you to run an installation/configuration program to adjust them, which makes them hard to fix if the computer locks up at startup because of a memory or interrupt conflict. Should this occur, and you have another PC available, you can move the suspect board to it, use the configuration program there to change the settings, and then move the board back to the original PC. If you don’t happen to have an extra PC lying around, you’re stuck with removing all the other cards (remember, it’s not going to work without a display adapter) and/or disabling any onboard hardware, then trying again. Once you can boot the PC and run the configuration program, you can try a different setting, replace or re-enable the hardware you removed, and try again.

Memory conflicts have been a burr in a PC user’s/administrator’s saddle for a long time. IBM tried to address them when it developed the Micro Channel Architecture (MCA) for its next-generation PCs. When installing a new device, a Product Definition File (.PDF) would be included by the device manufacturer that described all of the device’s available settings. All MCA machines included a setup program that would read the .PDF files of the installed devices and configure them all to defaults that wouldn’t overlap—in theory. It usually worked. Of course, a user was free to specify their own settings, and the setup program would do its utmost to point out any conflicts that the user might have inadvertently chosen.

Rival vendors created a competing bus standard, Extended Industry Standard Architecture (EISA), which had many of the same configuration features as MCA and included a setup program to configure its boards.

Recently, a majority of vendors have embraced the Peripheral Component Interconnect (PCI) bus. Many PCs are shipped with a combination of PCI/ISA bus slots so that both newer and older adapter boards can be used. PCI was developed to increase the speed of data transfer between a peripheral and the processor. As newer technologies increased data transfer speeds, the ISA bus became a bottleneck. This is best illustrated by the difference between an Ethernet 100BaseT (running at 100 megabits per second on the network) PCI network interface card and an Ethernet 100BaseT ISA network interface card. The ISA bus moves roughly 5 MB per second, while PCI (a 32-bit bus clocked at 33MHz) moves about 132 MB per second. With an ISA card, the bus itself becomes the bottleneck at the network interface and reduces performance.

PCI supports bus mastering: an intelligent peripheral can take control of (master) the bus in order to accelerate a high-priority task. PCI also allows for concurrency in bus mastering, where the CPU may operate simultaneously with the bus-mastering peripherals. This is illustrated by the ability of the CPU to run a mathematical calculation while, at the same time, a network interface card has control of the bus.

Of greatest interest in memory management, PCI supports "Plug and Play"–a specification for automatic configuration of jumper- and switch-free peripherals designed to avoid conflicting settings.

How Do Memory Conflicts Happen?

Besides conflicting hardware, applications themselves can create memory conflicts. The problems listed previously all arise from conflicting hardware in the reserved area; applications can only reside in system memory. However, occasionally a device will conflict with a portion of upper memory that is being used by a memory manager as system memory. This usually only occurs with peripheral hardware that loads a driver after the memory manager loads. Fortunately, memory managers can be forced to exclude specific areas to avoid conflicts, and through some careful analysis, a majority of memory conflicts can be resolved.

HIMEM.SYS

The Microsoft driver HIMEM.SYS is used to address 80286 and 80386 extended memory, converting it to XMS in accordance with the LIM specification. It also takes the first 64KB of this extended memory area and converts it into the HMA. It is loaded by placing a line, DEVICE=path\HIMEM.SYS, in the CONFIG.SYS file. Unless you are using a different memory manager, such as QEMM by Quarterdeck, HIMEM.SYS must be loaded before EMM386.EXE so that EMM386 can be used. This also enables an application, or simply the operating system, to access XMS memory.

Use of Expanded Memory Blocks (Using Emm386.exe)

EMM386.EXE performs two major functions. It enables and controls EMS, if desired, and enables the use of upper memory as system memory. It is generally conservative in its attempts to locate available upper memory. You can force EMM386 to use specific regions of upper memory by using the Include switch on the command line where it is enabled in CONFIG.SYS, and conversely exclude specific regions using the Exclude switch.
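As a sketch of how those switches fit together, the CONFIG.SYS fragment below loads HIMEM.SYS first, then EMM386.EXE with one included and one excluded region (the address ranges and the C:\DOS path are examples only; the correct ranges depend on the hardware installed):

DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\EMM386.EXE NOEMS I=B000-B7FF X=D000-D7FF
DOS=HIGH,UMB
DEVICEHIGH=C:\DOS\ANSI.SYS

The NOEMS parameter provides upper memory blocks without emulating expanded memory; substituting RAM for NOEMS enables both. Once the upper memory blocks exist, DEVICEHIGH in CONFIG.SYS (and LOADHIGH in AUTOEXEC.BAT) load drivers and TSRs into the upper memory that EMM386 has made available.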

Employing Utilities

As you can see, trying to resolve memory conflicts can be an incredibly difficult endeavor, because they can come from many sources. Also, "failure-mode analysis" often can’t be performed, because once the offending device driver loads into memory, the system locks up. Third-party vendors, notably Peter Norton with his Norton Utilities, concentrated on developing methods of checking and reporting on memory allocation and usage. Microsoft followed up with its MEM.EXE, MSD.EXE, and MEMMAKER.EXE utilities.

MEM.EXE is a simple command line utility that, using various command switches, can display various reports of memory usage. If you aren’t familiar with it, now is a good time to check it out. MEM.EXE is an external command, and should be present in either your DOS directory, or if running Windows 95, in the WINDOWS\COMMAND directory.

C:\WINDOWS>mem /?

Displays the amount of used and free memory in your system.

MEM [/CLASSIFY | /DEBUG | /FREE | /MODULE modulename] [/PAGE]

  /CLASSIFY or /C   Classifies programs by memory usage. Lists the size of
                    programs, provides a summary of memory in use, and lists
                    largest memory block available.
  /DEBUG or /D      Displays status of all modules in memory, internal
                    drivers, and other information.
  /FREE or /F       Displays information about the amount of free memory
                    left in both conventional and upper memory.
  /MODULE or /M     Displays a detailed listing of a module's memory use.
                    This option must be followed by the name of a module,
                    optionally separated from /M by a colon.
  /PAGE or /P       Pauses after each screenful of information.

From a DOS prompt, executing MEM.EXE yields a brief list of total memory usage:

C:\WINDOWS>mem

Memory Type        Total      Used       Free
----------------  --------  --------  --------
Conventional          640K       60K      580K
Upper                 155K      155K        0K
Reserved              384K      384K        0K
Extended (XMS)     64,357K       85K   64,272K
----------------  --------  --------  --------
Total memory       65,536K      684K   64,852K

Total under 1 MB      795K      214K      580K

Largest executable program size       580K (594,240 bytes)
Largest free upper memory block         0K (0 bytes)
MS-DOS is resident in the high memory area.

Notice the last line, indicating that DOS has been loaded "high" using the DOS=HIGH setting in the CONFIG.SYS file. Otherwise, it shows a breakdown of memory allocation by area and total. Nice to know, but not particularly helpful if you’re in trouble.

Using the /C switch gives you quite a bit more to work with:

 

C:\WINDOWS>mem /c

Modules using memory below 1 MB:

  Name        Total            Conventional       Upper Memory
  --------  ----------------  ----------------  ----------------
  SYSTEM      39,296   (38K)    32,112   (31K)     7,184    (7K)
  HIMEM        1,168    (1K)     1,168    (1K)         0    (0K)
  EMM386       4,320    (4K)     4,320    (4K)         0    (0K)
  DBLBUFF      2,976    (3K)     2,976    (3K)         0    (0K)
  WIN          3,776    (4K)     3,776    (4K)         0    (0K)
  vmm32       10,560   (10K)     8,944    (9K)     1,616    (2K)
  COMMAND      7,504    (7K)     7,504    (7K)         0    (0K)
  DRVSPACE   110,592  (108K)         0    (0K)   110,592  (108K)
  OAKCDROM    36,064   (35K)         0    (0K)    36,064   (35K)
  IFSHLP       2,864    (3K)         0    (0K)     2,864    (3K)
  Free       594,256  (580K)   594,256  (580K)         0    (0K)

Memory Summary:

  Type of Memory       Total        Used        Free
  ----------------  -----------  -----------  -----------
  Conventional          655,360       61,104      594,256
  Upper                 158,320      158,320            0
  Reserved              393,216      393,216            0
  Extended (XMS)     65,901,968       87,440   65,814,528
  ----------------  -----------  -----------  -----------
  Total memory       67,108,864      700,080   66,408,784

  Total under 1 MB      813,680      219,424      594,256

  Largest executable program size      594,240   (580K)
  Largest free upper memory block            0     (0K)
  MS-DOS is resident in the high memory area.

Notice that the summary is included as before, but now all loaded applications and drivers are detailed by name and individual memory usage. You can see immediately any changes that have been made from "tweaking" load parameters in AUTOEXEC.BAT or CONFIG.SYS. Changing parameters can add or take away free memory, and using MEM.EXE with the /C switch can help diagnose the results.

Output produced with the /C switch usually covers more than a single screen. Another helpful switch, /P, can be used in conjunction with /C (or any of the others) to "page" the results.
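For example, the following pauses the classified listing after each screenful:

C:\WINDOWS>mem /c /p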

Even more details can be obtained by using the /D switch.

Conventional Memory Detail:

  Segment    Total             Name       Type
  -------   ----------------   ---------  --------
  00000       1,024    (1K)               Interrupt Vector
  00040         256    (0K)               ROM Communication Area
  00050         512    (1K)               DOS Communication Area
  00070       1,424    (1K)    IO         System Data
                               CON        System Device Driver
                               AUX        System Device Driver
                               PRN        System Device Driver
                               CLOCK$     System Device Driver
                               A: - H:    System Device Driver
                               COM1       System Device Driver
                               LPT1       System Device Driver
                               LPT2       System Device Driver
                               LPT3       System Device Driver
                               CONFIG$    System Device Driver
                               COM2       System Device Driver
                               COM3       System Device Driver
                               COM4       System Device Driver
  000C9       6,704    (7K)    MSDOS      System Data
  0026C      30,528   (30K)    IO         System Data
              1,152    (1K)    XMSXXXX0   Installed Device=HIMEM
              4,304    (4K)    $MMXXXX0   Installed Device=EMM386
              2,960    (3K)    DblBuff$   Installed Device=DBLBUFF
                544    (1K)               Sector buffer
             16,080   (16K)               BUFFERS=30
              2,288    (2K)               LASTDRIVE=Z
              3,072    (3K)               STACKS=9,256
  009E0          80    (0K)    MSDOS      System Program
  009E5          32    (0K)    WIN        Data
  009E7         320    (0K)    WIN        Environment
  009FB       3,424    (3K)    WIN        Program
  00AD1          48    (0K)    vmm32      Data
  00AD4       8,896    (9K)    vmm32      Program
  00D00         336    (0K)    COMMAND    Data
  00D15       5,728    (6K)    COMMAND    Program
  00E7B       1,440    (1K)    COMMAND    Environment
  00ED5         336    (0K)    MEM        Environment
  00EEA      90,464   (88K)    MEM        Program
  02500     503,792  (492K)    MSDOS      -- Free --

Upper Memory Detail:

  Segment  Region    Total             Name       Type
  -------  ------   ----------------   ---------  --------
  0C95C      1      156,656  (153K)    IO         System Data
                    110,576  (108K)    DBLSYSH$   Installed Device=DRVSPACE
                     36,048   (35K)    MSCD001    Installed Device=OAKCDROM
                      2,848    (3K)    IFS$HLP$   Installed Device=IFSHLP
                      1,200    (1K)               Block device tables
                      5,616    (5K)               FILES=100
                        256    (0K)               FCBS=4
  0EF9B      1        1,616    (2K)    vmm32      Data

Memory Summary:

  Type of Memory       Total        Used        Free
  ----------------  -----------  -----------  -----------
  Conventional          655,360       61,104      594,256
  Upper                 158,320      158,320            0
  Reserved              393,216      393,216            0
  Extended (XMS)     65,901,968       87,440   65,814,528
  ----------------  -----------  -----------  -----------
  Total memory       67,108,864      700,080   66,408,784

  Total under 1 MB      813,680      219,424      594,256

  Memory accessible using Int 15h            0     (0K)
  Largest executable program size      594,240   (580K)
  Largest free upper memory block            0     (0K)
  MS-DOS is resident in the high memory area.

  XMS version 3.00; driver version 3.95

This shows not only each loaded application, but each memory segment and its corresponding location for each piece of each application.

This tool was a major leap forward from Microsoft, allowing users to visualize changes made to their systems. By manually rearranging the application and driver loading sequence and related parameters, after much trial and error, you could come up with an optimal configuration that provided the maximum amount of free memory.

Microsoft went one step better when it delivered its MSD.EXE (Microsoft System Diagnostics) program. This little gem roots out almost every conceivable item about your system that you’d ever want to know (and then some!) and displays it in a menu-driven format for you to browse. Alternatively, you can run MSD.EXE from the command line, or from a batch file, and write the query results to an output file or printer for a complete analysis using the /P option. An example of the output is included in Appendix G of this book. As you can see, not only is there a full output with statistics equivalent to the MEM.EXE program, but details on the hard drives and their partitioning, and motherboard, video, and networking elements as well. About the only thing missing is the processor speed. Microsoft chose to eliminate MSD.EXE from Windows 95, but in hindsight decided this was a bad idea, and so has posted it as a free download from their Web site.
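As a quick usage sketch (the report filename here is arbitrary), a full report can be captured to a text file from the command line or a batch file:

C:\>MSD /P REPORT.TXT

The resulting file can be printed or archived, which is handy for documenting a machine’s configuration before making changes.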

MemMaker and Other Optimization Utilities

While the MEM.EXE and MSD.EXE programs are excellent tools for memory conflict determination, what was really needed was a tool for problem avoidance. Microsoft delivered this in its MEMMAKER.EXE utility. Running this program automatically determines the best possible configuration and load sequence for a given set of applications and drivers. Before using MEMMAKER, the PC should be configured for normal operation (i.e., mouse driver, network operation, sound support, and so forth), including any items that are loaded from the AUTOEXEC.BAT and CONFIG.SYS files. MEMMAKER would run through hundreds, sometimes thousands, of combinations of command load sequencing and placement, then reboot itself to test its new configuration, and, if successful, ask the user to accept its determinations.
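A minimal sketch of typical usage follows; the switches shown are from the MS-DOS 6.x version, and MEMMAKER /? will list what your particular release supports:

C:\>MEMMAKER           (interactive optimization)
C:\>MEMMAKER /BATCH    (runs unattended, accepting all defaults)
C:\>MEMMAKER /UNDO     (restores the previous configuration if the new one misbehaves)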

MEMMAKER’s first version, which shipped with MS-DOS 6.0, was not perfected at its release. As often as not, it required multiple runs in order to find a configuration that would not hang the PC when it rebooted. By the DOS 6.2x releases, MEMMAKER had smoothed out its rough edges and could be counted on to deliver a clean and lean configuration.

Third-party vendors, notably Quarterdeck with its QEMM suite, continued to deliver slightly better, tighter memory configurations, yielding slightly more usable RAM. An added benefit of QEMM was its Stealth capability, which was specifically geared to take advantage of the Micro Channel PC’s ability to move its boards’ configurations dynamically, allowing even more available RAM to be recouped.

Windows 95 has now pretty much eliminated the need for memory managers. Previous versions of Windows required that networking elements, often the most memory-hungry components, be loaded in DOS prior to Windows startup. A sometimes annoying consequence was that DOS applications run from within Windows could never have more free memory than slightly less than the amount available when Windows started. Windows 95 now loads virtually everything after it starts, freeing up as much as a full 640KB (or slightly more, if you load DOS high and run EMM386.EXE). OS/2 also found a way to deliver as much as 740KB for each DOS session.

Illegal Operations Occurrences

Every now and then, an error pops up on the screen stating that the program you are running caused an illegal operation and will be shut down. Usually, that error is followed by another stating that there has been a page fault or a general protection fault, and then Windows will crash.

What is an illegal operation? In this case, illegal does not mean against the law, just not allowed by the processor; illegal operations are a consequence of processor design. The Intel 80X86 (286, 386, 486, Pentium, Pentium Pro, Pentium II) series of processors are designed to interrupt the execution of a program whenever they detect an abnormal condition. Many times, this abnormal condition is an application trying to access a part of upper memory by mistake. That can be due to a conflicting application or device driver, which this application did not expect, using that area of upper memory, or it can be due to invalid input from a corrupt file. This type of interrupt is called an exception. If you are familiar with Novell’s NetWare server operating system, this error is called an ABEND, which stands for abnormal end–and that pretty much describes what it is.

The operating system normally handles the exception and decides whether to process the exception and return an error, or whether to pass the exception to a handler provided by the application. When the application handles it, you don’t see any errors, so you never really know when this happens. When the operating system returns an error, the result usually escalates to termination of the application, and many times of the operating system itself, because files are not closed and hardware is not returned to its pre-application state. This, in turn, may lead to lost work and disk errors. Running the DOS CHKDSK or SCANDISK command is recommended after an illegal operation to fix the disk errors caused by files left in an open state. Then, running DEFRAG to defragment the pieces of the files on the hard drive results in a cleaner FAT and reduces pesky errors.
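As a minimal sketch of that cleanup, assuming MS-DOS 6.2 or later (where SCANDISK is available; on older versions, substitute CHKDSK /F), the sequence from a DOS prompt might look like this:

C:\>SCANDISK C: /AUTOFIX
C:\>DEFRAG C:

SCANDISK’s /AUTOFIX switch repairs lost clusters and similar errors without prompting, and DEFRAG then consolidates the file fragments left behind.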

Illegal operations can be caused by programs with corrupted files, such as a database program that uses indexes. If you keep getting illegal operations in a database program, find the command that will re-index the files or check the database for errors. Bugs in an application can cause illegal operations, too, which can be fixed by applying either operating system or application patches supplied by the manufacturer. And finally, there are plain hardware glitches, where a tiny surge in electricity flips a bit (changes it from a "1" to a "0" or vice versa) or two, and the application is doomed. If the glitches occur often, there are all sorts of other error messages, and all other routes (database index, CHKDSK, DEFRAG, applying patches) have been exhausted, this may indicate that the RAM or the system board is going bad. Don’t assume hardware first, though–these errors are nearly always caused by an application error.

Conflicts with 16-Bit Applications/Windows 95 Operations

In Windows 3.1, people used to get memory errors all the time. The errors would state that either there was no memory, or that you were running out of system resources. The first error–no memory–referred to a problem with the pagefile, which was created for Virtual Memory.

In Windows systems, virtual memory is where the operating system allocates more memory than the PC contains, and then pages virtual memory into a file on the hard drive. This means that a PC with 8MB of RAM would look like it had 64MB of RAM if the pagefile was 56MB. The pagefile, also called a swap file, would swap pages of RAM to the 56MB file on the hard drive. It would keep the most recently used memory in RAM, so that the current application would be able to run at optimal speed. The system sounds great, but didn’t work all that well, because the pagefile was static in size and had to be configured by the user of the PC who might not know off the top of their head what size that file should be.
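For reference, a permanent swap file under Windows 3.1 is normally set through the 386 Enhanced icon in Control Panel (Virtual Memory button), and the choices are recorded in the [386Enh] section of SYSTEM.INI. A hedged sketch of what those entries typically look like (the drive letter and size are examples only):

[386Enh]
PermSwapDOSDrive=C
PermSwapSizeK=16384

If the swap file or these entries become suspect, removing them and reconfiguring virtual memory through Control Panel recreates the file.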

From the Field

Always Check the Swap File Size

Improper swap file size in the real world accounts for a large number of calls to the help desk. Older operating systems, such as Windows 3.1 and Windows for Workgroups, have a horrible problem with memory. The infamous "blue screen of death" was usually attributed to poor memory allocation. Most of the good graphics-intensive programs written for those operating systems would crawl in Windows, and you would have to run them straight from DOS. Newer operating systems have improved this memory problem, but it still shows up. On Windows NT, if you open too many applications without a large amount of RAM, you still run into out-of-memory problems.

Sometimes the swap file itself, rather than just its size, is the cause of the problem. If there is any question about the integrity of the swap file, delete it and reboot to recreate it. A bug in Windows 3.11 in some instances caused the swap file to be placed on a network drive. Imagine the inefficiency: a 10 Mbps line reading and writing to a distant server. Always make sure that your swap file fits within your hard drive’s free space, or you risk your system sending this file to a network drive.

When you look at a personal computer and wonder what is wrong with it, always look for signs of memory or swap file trouble, especially if the system is performing poorly compared to other machines that have equal hardware and are running the same software. Memory can also trigger Dr. Watson errors. Even printing errors are suspect. Most memory problems cause the machine to stop responding. Don’t be afraid to increase the swap file; it gives your system more virtual RAM. However, actual RAM may end up being the ultimate solution in some cases. The larger you make your swap file, the more the hard drive is going to thrash in order to access the memory space.

by Ted Hamilton, MCP, A+ Certified

In Windows 95, 16-bit Windows (Windows 3.1) applications run in the System VM (System Virtual Machine), and Windows 95’s virtual memory manager handles the pagefile, adjusting its size to an optimal amount automatically. Because of the way 16-bit Windows applications used to work in Windows 3.1, they must run in the same shared memory space with the core components of Windows 95 and shared DLLs. As a result, an error that a 16-bit Windows application causes in this shared memory is the most likely cause of a system-wide error. (32-bit Windows applications each run in a separate private address space within the System VM, and that address space is totally separate from the shared address space where core components and 16-bit applications reside. Therefore, 32-bit Windows applications don’t present the problems that 16-bit Windows applications might. MS-DOS-based applications each run in their own VM, completely separate from the System VM.)

To fix these types of errors, the best course of action is to upgrade the 16-bit application to a 32-bit version, if one exists. If that is not possible, then contact the manufacturer and obtain the latest patches and fixes. If there is any question about the way virtual memory is handled, it can be reviewed in the Control Panel System icon under the Performance tab and adjusted by clicking the Virtual Memory button.
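If you do specify your own settings there, Windows 95 records them in the [386Enh] section of SYSTEM.INI. The following is only a hedged sketch; the entry names are the commonly documented ones, the drive and sizes are examples, and by default Windows 95 manages the file itself so no size entries appear:

[386Enh]
PagingDrive=C:
MinPagingFileSize=20480
MaxPagingFileSize=65536

Sizes are in kilobytes. Letting Windows 95 manage the swap file dynamically is almost always the better choice.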

Now remember, the second error that you used to get in Windows 3.1 stated that you were running out of system resources. The only way to get rid of the error was to reboot. System resources degraded over time–some programs would not return all the resources even after they closed, and you simply ran out of them. Often.

Now, system resources is a rather cryptic term, because system resources refer to small areas of memory, called memory heaps, used by the graphics device interface (GDI) and User system components. In Windows 3.1, these heaps were 64KB in size and used 16-bit processing. In Windows 95, the heaps became 32-bit, but the 16-bit heaps didn’t go away, because of the need for backward compatibility.

The errors for running out of system resources are rare under Windows 95 because of the 32-bit heaps. But 16-bit Windows applications using the 16-bit heaps can still cause this error, especially when the application is one of those badly behaved programs that won’t give back its system resources when it is done with them. The only way to fix system resource errors in the short term is to reboot the PC. The best long-term fix is to contact the manufacturer for a patch or upgrade the application to a 32-bit version.

Certification Summary

Memory management is not an easy game to play, nor is resolving memory conflicts and optimization issues. The various flavors of MS-DOS and Windows have each brought with them new concerns with regards to memory management. Hopefully, you leave this chapter with a better understanding of memory placement, how applications and hardware play nicely together, and proper optimization techniques.

Two-Minute Drill

Standard RAM is also known as volatile memory, in that it loses its contents when system power is shut down, while ROM is nonvolatile, its contents remaining unchanged even when power is removed.
You can force EMM386 to use specific regions of upper memory by using the Include switch on the command line where it is enabled in CONFIG.SYS, and conversely exclude specific regions using the Exclude switch.
ROM began as a write-once chip. Instruction code was electrically fed to the chip and retained there permanently.
The EEPROM (electrically erasable) chip could be erased by applying a specific voltage to the chip.
DIP chips are physically soldered to a motherboard. SIMMs are chips that are soldered to a small board that is installed into a slot on the motherboard. Using SIMMs allows the memory to be easily replaced.
Be careful in the selection of memory upgrade components. The memory module must match the requirements of the motherboard.
The LIM memory specification renamed system memory to conventional memory, and defined additional (above the original 1024KB address limit) areas as expanded, extended, and high memory.
Don’t assume hardware first when you receive an illegal operation–these errors are nearly always caused by an application error.
Before using MEMMAKER, a PC should be configured for normal operation (i.e. mouse driver, network operation, sound support, and so forth), including any items that are loaded from the AUTOEXEC.BAT and CONFIG.SYS files.
Virtual memory is memory that the processor has been "tricked" into using as if it were actual physical memory.
Under Windows 95, the only way to fix system resource errors in the short term is to reboot the PC. The best long-term fix is to contact the manufacturer for a patch or upgrade the application to a 32-bit version.
MSD.EXE roots out almost every conceivable item about your system that you’d ever want to know (and then some!) and displays it in a menu-driven format for you to browse.
Memory conflicts arising from optional devices, such as network cards, can be difficult to pinpoint.
PCI was developed to increase the speed of data transfer between a peripheral and the processor.
Windows 32-bit applications don’t present the problems that Windows 16-bit applications might.
Memory managers can be forced to exclude specific areas to avoid conflicts, and through some careful analysis, a majority of memory conflicts can be resolved.
EMM386.EXE performs two major functions. It enables and controls EMS, if desired, and enables the use of upper memory as system memory.
Physical memory consists of the hardware that handles memory in a PC. This memory is stored in chips that are either ROM–read only memory chips, or RAM–random access memory.
MEM.EXE is a simple command line utility that, using various command switches, can display various reports of memory usage.
Windows 95 has now pretty much eliminated the need for memory managers.
The Intel 80X86 (286, 386, 486, Pentium, Pentium Pro, Pentium II) series of processors are designed to interrupt the execution of a program whenever they detect an abnormal condition.